It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.
I read this as a lack of engaging with the situation as real. But possibly my sense that a non-negligible number of people have this flavor of position is wrong.
A big difference is that, assuming you’re talking about futures in which AI doesn’t have catastrophic outcomes, no one will be forcibly mandated to do anything.
Another important point is that, sure, people won’t need to do work, which means they will be unnecessary to the economy, barring some pretty sharp human enhancement. But this downside, along with all the other downsides, looks extremely small compared to the non-AGI default of dying of aging and having a 1⁄3 chance of getting dementia, 40% chance of getting cancer, your loved ones dying, etc.
This isn’t clear to me: does every option that involves someone being forcibly mandated to do something qualify as a catastrophe? Conceptually, there seems to be a lot of room between “no one is forcibly mandated to do anything” and “catastrophic outcomes”.
I understand the analogy in Katja’s post as being: even in a great post-AGI world, everyone is forced to move to a post-AGI world. That world has higher GDP/capita, but it doesn’t necessarily contain the specific things people value about their current lives.
Just listing all the positive aspects of living in NYC (even if they’re very positive) might not remove all hesitation: I know my local community, my local parks, the beloved local festival that happens in August.
If all diseases have been cured in NYC and I’m hesitant because I’ll miss out on the festival, I’m probably not adequately taking the benefits into account. But if you tell me not to worry at all about moving to NYC, you’re also not taking all the costs into account / aren’t talking in a way that will connect with me.
Why do you believe this? It seems to me that in the unlikely event that the AI doesn’t exterminate humanity, it’s much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.
Not sure I love this analogy (moving to NYC doesn’t seem like that big of a deal), but I do think it’s pretty messed up to be imposing huge social / technological / societal changes on 8 billion of your peers. I expect most of the people building AGI have not really grasped the ethical magnitude of doing this; I think I sort of have, but also I don’t build AGI.
You seem to be privileging the status quo. Refraining from doing that has equally large effects on your peers.
Things staying mostly the same doesn’t seem to count as a “large effect”. For example, we wouldn’t say taking a placebo pill has a large effect.
I’m not as chill as all that, and I absolutely appreciate people worrying about those dimensions. But I do tend to act in day-to-day behavior (and believe, in the average sense: my probabilistic belief range includes a lot of scenarios, but the average and median are somewhat close together, which is probably a sign of improper heuristics) as if it’ll all be mostly-normal. I recently turned down a very good job offer in NYC (and happily, later found a better one in Seattle), but I see the analogy, and kind of agree it’s a good one, though from the other side: even people who think they’d hate NYC are probably wrong; hedonic adaptation is amazingly strong. I’ll try to represent those you’re frustrated with.
There will absolutely be changes, many of which will be uncomfortable, and probably regress from my peak-preference. As long as it’s not extinction or effective-extinction (a few humans kept in zoos or the like, but economically unimportant to the actual intelligent agents shaping the future), it’ll be … OK. Not necessarily great compared to imaginary utopias, but far better than the worst outcomes. Almost certainly better than any ancient person could have expected.
Do you really mean to indicate that not running everything is equivalent to extinction?
Pretty much, yes. Total loss of power and value is essentially slow/delayed extinction. It’s certainly cultural extinction.
Note that I forgot to say that I put some weight/comfort in thinking there are some parts of mindspace which an AI could include that are nearly as good as (or maybe better than) biologicals. Once everyone I know and everyone THEY know are dead, and anything I recognize as virtues are mutated beyond my recognition, it’s not clear what preferences I would have about the ongoing civilization. Maybe extinction is an acceptable outcome.
What does “value” mean here? I seriously don’t know what you mean by “total loss of value”. Is this tied to your use of “economically important”?
I personally don’t give a damn for anybody else depending on me as the source of anything they value, at least not with respect to anything that’s traditionally spoken of as “economic”. In fact I would prefer that they could get whatever they wanted without involving me, and I could get whatever I wanted without involving them.
And power over what? Most people right this minute have no significant power over the wide-scale course of anything.
I thought “extinction”, whether for a species or a culture, had a pretty clear meaning: It doesn’t exist any more. I can’t see how that’s connected to anything you’re talking about.
I do agree with you about human extinction not necessarily being the end of the world, depending on how it happens and what comes afterwards… but I can’t see how loss of control, or value, or whatever, is connected to anything that fits the word “extinction”. Not physical, not cultural, not any kind.
“value” means “net positive to the beings making decisions that impact me”. Humans claim to and behave as if they care about other humans, even when those other humans are distant statistical entities, not personally-known.
The replacement consciousnesses will almost certainly not feel the same way about “legacy beings”, and to the extent they preserve some humans, it won’t be because they care about them as people, it’ll be for more pragmatic purposes. And this is a very fragile thing, unlikely to last more than a few thousand years.
Sure, but they can’t, and you can’t. They can only get what other humans give/trade/allow to them, and you are in the same boat. “whatever you want” includes limited exclusive-use resources, and if it’s more valuable (overall, for the utility functions of whatever’s making the decisions) to eliminate you than to share those resources, you’ll be eliminated.
I think I understand what you mean.
There are definitely possible futures worse than extinction. And some fairly likely ones that might not be worse than extinction but would still suck big time, varying from comparable to a forced move, to a damn sight worse than moving to anywhere that presently exists. I’m old enough to have already had some disappointments (alongside some positive surprises) about how the “future” has turned out. I could easily see how I could get a lot worse ones.
But what are we meant to do with what you’ve posted and how you’ve framed it?
Also, if somebody does have the “non-extinction ⇒ good” mindset, I suspect they’ll be prone to read your post as saying that change in itself is unacceptable, or at least that any change that every single person doesn’t agree to is unacceptable. Which is kind of a useless position since, yeah, there will always be change, and things not changing will also always make some people unhappy.
I’ve gotta say that, even though I definitely worry about non-extinction dystopias, and think that they are, in the aggregate, more probable than extinction scenarios… your use of the word “meaning” really triggered me. That truly is a word people use really incoherently.
Maybe some more concrete concerns?
From Cognitive Biases Potentially Affecting Judgment of Global Risks:
This has also been very true in my experience.
Counterpoint: “so you’re saying I could guarantee taking every single last one of those motherfuckers to the grave with me?”
Weird coincidence: I was just thinking about Leopold’s bunker concept from his essay. It was a pretty careless paper overall, but the imperative to put alignment research in a bunker makes perfect sense; I don’t see the surface as a viable place for people to get serious work done (at least, not in densely populated urban areas; somewhere in the desert would count as a “bunker” in this case so long as it’s sufficiently distant from passersby and the sensors and compute in their phones and cars).
Of course, this is unambiguously a necessary evil: a tiny handful of people are going to have to choose to live in a sad, uncomfortable place for a while, and only because there’s no other option and it’s obviously the correct move for everyone everywhere, including the people in the bunker.
Until the basics of the situation start somehow getting taught in classrooms or something, we’re going to be stuck with a ludicrously large proportion of people who are satisfied with the kind of bite-sized, convenient takes that got us into this whole unhinged situation in the first place (or who have no thoughts at all).
If a tech company forced me to move to NYC, I would object for a combination of two separate reasons: (1) any change in my life is going to be hard: it may take me away from people I know, I need to learn the geography again, I live in Lothlórien right now and if I move to NYC nobody speaks Quenya, etc. And (2) things that are specific about NYC, above and beyond the fact that change is going to be a problem by itself; for instance, I might hate subways, and I might hate subways whether I’m exposed to lots of them or not.
#2 can be a personal problem for me, but I notice that people in NYC aren’t, on average, less happy than people who live elsewhere, so it seems like #2 isn’t a real issue when averaged over the whole population. #1 can be an issue even averaged over the whole population, of course, but #1 isn’t unique to moving to NYC, and applies to a whole bunch of other changes, to the point where it’s most of the way to being a fully general argument against any change.
I’d expect the same to be true in the case of AI: The “change is a problem” component is negative, but it’s no worse than any other sort of change, and the “AI specifically is a problem” component would include some people who are harmed and some people who benefit and overall it’s going to be a wash.
Or to put it another way, just because I wouldn’t want a tech company to move me to NYC, that doesn’t imply that NYC is a worse place to live than where I am now.
Each time, you can also apply this argument in reverse: I don’t like X about my city, so I’m happy that, in the hypothetical, the company will relocate me to NYC. And since NYC is presumed to be overall better, there are more instances of the latter than of the former.
It seems to me you are taking the argument seriously, but very selectively.
(I think both kinds of thoughts pretty often, and I’m overall happy about the incoming move).
I’d give my right eye in exchange for the chance to live in NYC.
NYC is a great city. Many tech workers I know are trying to move to NYC, especially younger ones. So, not the best example, but I get your point.
I think the point by the OP is that while YOU might think NYC is a great place, not everybody does. One of the nice things about the current model is that you can move to NYC if you want to, but you don’t have to. In the hypothetical All-AGI All Around The World future, you get moved there whether or not you like it. Some people will, but it’s worth thinking about the people who won’t like it and consider what you might do to make that future better for them as well.
I think this post was supposed to be some sort of gotcha to SF AI optimists, given how it’s worded, but in reality a lot of tech workers without family here would gladly move to NYC.[1]
A better example would be Dubai. Objectively not a bad city, and you could possibly make a lot more without tax, but there are obvious reasons you’d be hesitant. Still, I don’t think this is that huge of a gotcha. The type of people this post is targeting are generally risk-tolerant. So yeah, if you effectively tripled their pay and made them move to Dubai, they’d take it with high likelihood.
I don’t get the “misses the point” reaction, as I’m pretty sure this was the true motivation of the post. Think about it: the people they could be talking about, for whom a NYC relocation is within the realm of possibility, are tech workers who are chill with AI transformations.
This seems like a rather silly argument. You can apply it to pretty much any global change, any technological progress. The world changes, and will change. You can be salty about it, or you can adapt.